91.
As an unsupervised learning method, stochastic competitive learning is commonly used for community detection in social network analysis. Compared with traditional community detection algorithms, it has the advantage of realizing time-series community detection by simulating the community formation process. To improve accuracy and remove the need to pre-set several of its parameters, the authors improve the algorithm through particle position initialization, parameter optimization, and self-adaptive particle domination ability. The experimental results show that each improvement raises the accuracy of the algorithm, and the F1 score of the improved algorithm is 9.07% higher than that of the original algorithm.
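To make the mechanism concrete, here is a minimal Python sketch of the basic particle competition process that stochastic competitive learning builds on, not the authors' improved variant. The toy graph, particle count, step budget, and `delta` increment are all illustrative assumptions, and `networkx` is assumed only to build the example graph.

```python
import random
import networkx as nx  # assumed available; used only to build a toy graph

def particle_competition(G, n_particles=2, steps=5000, delta=0.1, seed=0):
    """Minimal sketch of stochastic competitive learning on a graph G.

    Each particle performs a random walk; visiting a node raises that
    particle's domination level on the node and lowers the others'.
    Nodes are finally assigned to the particle that dominates them.
    """
    rng = random.Random(seed)
    nodes = list(G.nodes())
    # domination[v][k]: how strongly particle k dominates node v (uniform start)
    domination = {v: [1.0 / n_particles] * n_particles for v in nodes}
    position = [rng.choice(nodes) for _ in range(n_particles)]

    for _ in range(steps):
        for k in range(n_particles):
            nbrs = list(G.neighbors(position[k]))
            if not nbrs:
                continue
            position[k] = rng.choice(nbrs)
            lvl = domination[position[k]]
            for j in range(n_particles):
                if j != k:                       # rival particles decay,
                    lvl[j] = max(0.0, lvl[j] - delta / (n_particles - 1))
            lvl[k] = min(1.0, lvl[k] + delta)    # the visitor gains

    return {v: max(range(n_particles), key=lambda k: domination[v][k])
            for v in nodes}

# Toy usage: two planted communities should be split between two particles.
G = nx.planted_partition_graph(2, 16, 0.9, 0.05, seed=1)
labels = particle_competition(G, n_particles=2)
```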
92.
In this paper, supervised Deep Neural Network (DNN) based signal detection is analyzed for efficiently combating nonlinear distortions and improving error performance in clipping-based Orthogonal Frequency Division Multiplexing (OFDM) systems. One of the main disadvantages of OFDM is its high Peak-to-Average Power Ratio (PAPR). Clipping is a simple method for PAPR reduction; however, it introduces nonlinear distortion, which makes estimating the transmitted symbols difficult even with Maximum Likelihood (ML) detection at the receiver. The DNN-based online signal detection uses an offline-trained model in which all weights and biases of the fully connected layers are set, using training data sets, to overcome the nonlinear distortions. This paper therefore introduces the processes required for online signal detection and offline learning, and compares error performance with ML detection in clipping-based OFDM systems. In the simulation results, the DNN-based signal detection achieves better error performance than conventional ML detection over a multi-path fading wireless channel, and the improvement grows with system complexity, for example in large Multiple Input Multiple Output (MIMO) systems and at high clipping rates.
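As a concrete illustration of the clipping step, the following NumPy sketch generates one QPSK-modulated OFDM symbol, clips its amplitude, and reports the PAPR before and after. The subcarrier count and clipping ratio are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # illustrative number of subcarriers

# Random QPSK symbols on N subcarriers, converted to a time-domain OFDM symbol.
bits = rng.integers(0, 2, size=(2, N))
symbols = (2 * bits[0] - 1 + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
x = np.fft.ifft(symbols) * np.sqrt(N)     # time-domain signal

def papr_db(signal):
    """Peak-to-average power ratio in dB."""
    p = np.abs(signal) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Amplitude clipping: cap |x| at CR times the RMS level (CR is illustrative).
cr = 1.2
threshold = cr * np.sqrt(np.mean(np.abs(x) ** 2))
clipped = np.where(np.abs(x) > threshold,
                   threshold * x / np.abs(x),   # keep phase, cap magnitude
                   x)

print(f"PAPR before: {papr_db(x):.2f} dB, after: {papr_db(clipped):.2f} dB")
```

The price of this simplicity is exactly the in-band distortion the paper's DNN detector is trained to undo.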
93.
In reliability analysis, the stress-strength model is often used to describe the life of a component that has a random strength (X) and is subjected to a random stress (Y). In this paper, we consider the problem of estimating the reliability R = P[Y &lt; X] when the stress and strength are independent and both follow the exponentiated Pareto distribution. The maximum likelihood estimator of the stress-strength reliability is calculated under simple random sampling, ranked set sampling, and median ranked set sampling. Four different reliability estimators under median ranked set sampling are derived: two are obtained when both the strength and stress samples have an odd, or both an even, set size; the other two are obtained when the strength has an odd set size and the stress an even one, and vice versa. The performances of the suggested estimators are compared with their simple-random-sample competitors via a simulation study. The simulation study revealed that the stress-strength reliability estimates based on ranked set sampling and median ranked set sampling are more efficient than their competitors based on simple random sampling. In general, the stress-strength reliability estimates based on median ranked set sampling are smaller than the corresponding estimates under ranked set sampling and simple random sampling.
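The core quantity R = P[Y &lt; X] can be illustrated with a short Monte Carlo sketch, assuming the exponentiated Pareto CDF F(x) = (1 − (1 + x)^(−λ))^θ; when X and Y share λ, R has the closed form θ_X/(θ_X + θ_Y) to check against. Parameters and sample sizes are illustrative, and the sketch uses plain simple random sampling rather than the paper's RSS/MRSS estimators.

```python
import numpy as np

rng = np.random.default_rng(42)

def rexp_pareto(n, theta, lam):
    """Sample the exponentiated Pareto distribution by CDF inversion.

    Assumed CDF: F(x) = (1 - (1 + x)**(-lam))**theta, x > 0.
    """
    u = rng.uniform(size=n)
    return (1.0 - u ** (1.0 / theta)) ** (-1.0 / lam) - 1.0

# Illustrative parameters: strength X and stress Y share lam, differ in theta.
theta_x, theta_y, lam, n = 3.0, 1.0, 2.0, 100_000
x = rexp_pareto(n, theta_x, lam)
y = rexp_pareto(n, theta_y, lam)

# Monte Carlo estimate of R = P(Y < X) from paired simple random samples.
r_hat = np.mean(y < x)

# With a common lam, R = theta_x / (theta_x + theta_y) in closed form.
r_true = theta_x / (theta_x + theta_y)
print(f"R_hat = {r_hat:.4f}, closed form = {r_true:.4f}")
```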
94.
In this article, a new generalization of the inverse Lindley distribution is introduced based on the Marshall-Olkin family of distributions. We call the new distribution the generalized Marshall-Olkin inverse Lindley distribution; it offers more flexibility for modeling lifetime data and includes the inverse Lindley and the Marshall-Olkin inverse Lindley as special cases. Essential properties of the generalized Marshall-Olkin inverse Lindley distribution are discussed and investigated, including the quantile function, ordinary moments, incomplete moments, moments of residual life, and stochastic ordering. Maximum likelihood estimation is considered under complete samples, Type-I censoring, and Type-II censoring; maximum likelihood estimators as well as approximate confidence intervals of the population parameters are discussed. A comprehensive simulation study is conducted to assess the performance of the estimates in terms of their biases and mean square errors. The utility of the generalized Marshall-Olkin inverse Lindley model is illustrated by means of two real data sets. The results show that the generalized Marshall-Olkin inverse Lindley model can produce better fits than the power Lindley, extended Lindley, alpha power transmuted Lindley, alpha power extended exponential, and Lindley distributions.
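As a sketch of the fitting step, the code below runs numerical maximum likelihood for the Marshall-Olkin inverse Lindley special case (not the full generalized model), assuming the standard inverse Lindley CDF F(x) = (1 + θ/((1 + θ)x))·e^(−θ/x) and one common Marshall-Olkin form G = F/(F + α(1 − F)), whose density is g = αf/(F + α(1 − F))². The synthetic data and starting values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def inv_lindley_pdf(x, theta):
    """Inverse Lindley density (standard form assumed):
    f(x) = theta**2/(1+theta) * (1+x)/x**3 * exp(-theta/x)."""
    return theta**2 / (1 + theta) * (1 + x) / x**3 * np.exp(-theta / x)

def inv_lindley_cdf(x, theta):
    """Inverse Lindley CDF: F(x) = (1 + theta/((1+theta)*x)) * exp(-theta/x)."""
    return (1 + theta / ((1 + theta) * x)) * np.exp(-theta / x)

def mo_inv_lindley_pdf(x, alpha, theta):
    """Marshall-Olkin inverse Lindley density under the parameterization
    G = F / (F + alpha*(1-F)), hence g = alpha*f / (F + alpha*(1-F))**2."""
    f, F = inv_lindley_pdf(x, theta), inv_lindley_cdf(x, theta)
    return alpha * f / (F + alpha * (1 - F)) ** 2

def neg_log_lik(params, data):
    alpha, theta = params
    if alpha <= 0 or theta <= 0:
        return np.inf                     # keep the search in the valid region
    return -np.sum(np.log(mo_inv_lindley_pdf(data, alpha, theta)))

# Illustrative fit to synthetic positive data (replace with a real data set).
rng = np.random.default_rng(0)
data = 1.0 / rng.gamma(shape=2.0, scale=1.0, size=500)
res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
print("alpha_hat, theta_hat =", res.x)
```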
95.
Due to their outstanding ability to process large quantities of high-dimensional data, machine learning models have been used in many settings, such as pattern recognition, classification, spam filtering, data mining, and forecasting. As a well-established machine learning algorithm, K-Nearest Neighbor (KNN) has been widely applied in such situations, yet its use in selecting qualified applicants for funding is relatively new. The major problem lies in how to accurately determine the importance of attributes. In this paper, we propose a Feature-weighted Gradient Descent K-Nearest Neighbor (FGDKNN) method to classify funding applicants into two types: approved and not approved. FGDKNN uses a gradient descent learning algorithm to update the feature weights: it iteratively minimizes the error ratio so that the importance of attributes can be described better. We investigate the performance of FGDKNN on Beijing Innofund data. The results show that FGDKNN performs about 23%, 20%, 18%, and 15% better than KNN, SVM, DT, and ANN, respectively. Moreover, FGDKNN converges quickly under different training scales and performs well under different settings.
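A minimal sketch of the idea follows: weight each feature inside the KNN distance, then adjust the weights to push the validation error ratio down. The finite-difference update below is a stand-in for the paper's gradient descent rule, and all hyperparameters are illustrative.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, X_query, w, k=5):
    """KNN with a per-feature weight vector w inside the distance metric."""
    preds = []
    for q in X_query:
        d = np.sqrt(((X_train - q) ** 2 * w).sum(axis=1))
        nearest = y_train[np.argsort(d)[:k]]
        preds.append(np.bincount(nearest).argmax())   # majority vote
    return np.array(preds)

def fit_feature_weights(X_tr, y_tr, X_val, y_val,
                        k=5, lr=0.5, epochs=10, eps=1e-2):
    """Toy weight learning: nudge each feature weight in the direction that
    lowers the validation error ratio (a finite-difference surrogate for the
    gradient descent update described in the paper)."""
    w = np.ones(X_tr.shape[1])
    for _ in range(epochs):
        base = np.mean(weighted_knn_predict(X_tr, y_tr, X_val, w, k) != y_val)
        grad = np.zeros_like(w)
        for j in range(len(w)):
            w_try = w.copy()
            w_try[j] += eps
            err = np.mean(
                weighted_knn_predict(X_tr, y_tr, X_val, w_try, k) != y_val)
            grad[j] = (err - base) / eps
        w = np.clip(w - lr * grad, 1e-6, None)        # keep weights positive
    return w

# Toy usage with random data; labels must be non-negative integers.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = (X[:, 0] + 0.1 * rng.normal(size=120) > 0).astype(int)
w = fit_feature_weights(X[:80], y[:80], X[80:], y[80:])
```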
96.
Host cardinality estimation is an important research field in network management and network security, and estimation based on a linear estimator array is a common method. Existing algorithms do not take memory footprint into account when selecting the number of estimators assigned to each host. This paper analyzes the relationship between memory occupancy and estimation accuracy and compares the effects of different parameters on algorithm accuracy. Cardinality estimation is a randomized algorithm, so there is a deviation between the estimated results and the actual cardinalities. This deviation is affected by systematic factors, such as the random parameters inherent in the linear estimator and the random functions used to map a host to different linear estimators. These random factors cannot be reduced by merging multiple estimators, and existing algorithms cannot remove the deviation they cause. In this paper, we regard the estimation deviation as a random variable and propose a sampling method, denoted the linear estimator array step sampling algorithm (L2S), to reduce the influence of this random deviation. L2S improves the accuracy of the estimated cardinalities by estimating and removing the expected value of the random deviation. The cardinality estimation algorithm based on the estimator array is computationally intensive and takes a long time to process high-speed network data in a serial environment. To solve this problem, a method is proposed to port the algorithm to the Graphics Processing Unit (GPU). Experiments on real-world high-speed network traffic show that L2S can reduce the absolute bias by more than 22% on average, with an average extra processing time of less than 61 milliseconds.
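For context, a single linear estimator (a bitmap counting sketch), the building block the paper's estimator arrays are made of, can be sketched as follows. The bitmap size and toy traffic are illustrative, and the L2S step-sampling correction itself is not reproduced here.

```python
import numpy as np

class LinearEstimator:
    """Minimal linear (bitmap) counting sketch."""

    def __init__(self, m=1024, seed=0):
        self.m = m
        self.bits = np.zeros(m, dtype=bool)
        self.seed = seed

    def add(self, item):
        # Hash the item into one of m bit positions and set it.
        self.bits[hash((self.seed, item)) % self.m] = True

    def estimate(self):
        # Linear counting estimate: n_hat = -m * ln(z / m),
        # where z is the number of bits still zero.
        z = np.count_nonzero(~self.bits)
        if z == 0:
            return float("inf")           # sketch saturated
        return -self.m * np.log(z / self.m)

# Toy usage: estimate the number of distinct peers contacted by one host.
est = LinearEstimator(m=1024)
for peer in (f"10.0.0.{i % 300}" for i in range(10_000)):
    est.add(peer)
print(f"estimated cardinality ~= {est.estimate():.0f} (true 300)")
```

The random bit positions and the hash seed are exactly the kind of systematic random factors the paper says merging estimators cannot average away.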
97.
Single image super-resolution (SISR) is an important research topic in computer vision and image processing. With the rapid development of deep neural networks, many image super-resolution models have emerged. Compared with traditional SISR methods, deep learning-based methods can complete super-resolution tasks from a single image; and compared with SISR methods using conventional convolutional neural networks, SISR based on generative adversarial networks (GAN) has achieved state-of-the-art visual performance. In this review, we first discuss the challenges faced by SISR and introduce some common datasets and evaluation metrics. Then, we review the improved network structures and loss functions of GAN-based perceptual SISR. Subsequently, the advantages and disadvantages of different networks are analyzed through multiple comparative experiments. Finally, we summarize the paper and look forward to future development trends of GAN-based perceptual SISR.
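As one concrete example of the loss functions such a review covers, the sketch below shows an SRGAN-style combined objective: a feature-space content loss plus a lightly weighted adversarial term. The 10⁻³ weighting follows the SRGAN paper; other GAN-based SISR models vary both terms, and `feat_extractor` stands in for any frozen feature network (e.g. VGG layers).

```python
import torch
import torch.nn.functional as F

def perceptual_sr_loss(sr, hr, disc_logits, feat_extractor, adv_weight=1e-3):
    """Sketch of a combined GAN-based perceptual SISR generator loss.

    sr, hr:         generated and ground-truth high-resolution image batches
    disc_logits:    discriminator logits for the generated images
    feat_extractor: frozen network mapping images to feature maps
    """
    # Content loss in feature space rather than pixel space.
    content = F.mse_loss(feat_extractor(sr), feat_extractor(hr))
    # Adversarial term: the generator wants the discriminator to say "real".
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))
    return content + adv_weight * adversarial
```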
98.
Cyberattacks on Industrial Control Systems (ICS) have recently been increasing, made more intelligent by advancing technologies, so cybersecurity for such systems is attracting attention. As a core control device, the Programmable Logic Controller (PLC) in an ICS carries out on-site control over the ICS; a cyberattack on the PLC can damage the overall ICS, with Stuxnet and Duqu as the most representative cases. Thus, cybersecurity for PLCs is considered essential, and many researchers carry out a variety of analyses of PLC vulnerabilities as part of preemptive efforts against attacks. In this study, a vulnerability analysis was conducted on the XGB PLC. Security vulnerabilities were identified by analyzing the network protocols and memory structure of the PLC, and were exploited to launch replay attacks, memory modulation attacks, and FTP/web service account theft to verify the results. The attacks were shown to be able to cause the PLC to malfunction and to disable it, and the identified vulnerabilities were documented.
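To illustrate the replay attack class (not the paper's exact exploit), the sketch below resends a previously captured command frame to a PLC endpoint. The host, port, and frame bytes are hypothetical placeholders, not values from the paper.

```python
import socket

# Hypothetical values for illustration only: a command frame previously
# captured from traffic between an engineering workstation and the PLC,
# and the PLC's address. Neither comes from the paper.
PLC_HOST, PLC_PORT = "192.168.0.10", 2004
CAPTURED_FRAME = b"\x4c\x47\x00\x00"       # placeholder bytes, not a real frame

def replay(frame: bytes, host: str, port: int) -> bytes:
    """Core of a replay attack: resend a captured command frame verbatim.

    A protocol without freshness checks (nonces, sequence numbers,
    timestamps) or authentication will accept the stale frame as a
    legitimate command, which is what makes this class of attack work.
    """
    with socket.create_connection((host, port), timeout=3) as s:
        s.sendall(frame)
        return s.recv(4096)

# Usage (against lab equipment only): replay(CAPTURED_FRAME, PLC_HOST, PLC_PORT)
```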
99.
Neural Machine Translation (NMT) is an end-to-end learning approach to automated translation that overcomes the weaknesses of conventional phrase-based translation systems. Although NMT-based systems have gained popularity in commercial translation applications, there is still plenty of room for improvement. As the most popular search algorithm in NMT, beam search is vital to the translation result; however, traditional beam search can produce duplicate or missing translations because of its target sequence selection strategy. To alleviate this problem, this paper proposes neural machine translation improvements based on a novel beam search evaluation function, and uses reinforcement learning to train a translation evaluation system that selects better candidate words when generating translations. We conducted extensive experiments to evaluate our methods, using the CASIA corpus and the 1,000,000 bilingual sentence pairs from NiuTrans. The experimental results show that the proposed methods can effectively improve English-to-Chinese translation quality.
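For reference, the traditional beam search the paper improves on can be sketched as follows. Candidates here are ranked by plain length-normalized log-probability, the kind of scoring a learned evaluation function would replace; `step_fn`, the beam width, and the length limit are illustrative assumptions.

```python
def beam_search(step_fn, start_token, end_token, beam_width=4, max_len=20):
    """Minimal standard beam search over a step function.

    step_fn(prefix) must return a list of (token, log_prob) continuations
    for the given token prefix, e.g. from an NMT decoder.
    """
    beams = [([start_token], 0.0)]           # (token sequence, total log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for tok, logp in step_fn(seq):
                candidates.append((seq + [tok], score + logp))
        # Keep only the top beam_width partial hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            (finished if seq[-1] == end_token else beams).append((seq, score))
        if not beams:                        # every hypothesis has ended
            break
    # Length-normalize so long hypotheses are not unfairly penalized.
    return max(finished + beams, key=lambda c: c[1] / len(c[0]))
```

Because every beam greedily keeps only the locally best-scoring prefixes, near-identical prefixes can crowd out diverse ones, which is the duplicate/missing-translation behavior the paper targets.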
100.
The occurrence of perioperative heart failure affects the quality of medical services and threatens patient safety. Existing methods depend on the judgment of doctors, so the results are affected by many factors such as the doctors' knowledge and experience; accuracy is difficult to guarantee and there is a serious lag. In this paper, a mixture prediction model is proposed for perioperative adverse events of heart failure that combines the advantages of the Deep Pyramid Convolutional Neural Network (DPCNN) and Extreme Gradient Boosting (XGBOOST). The DPCNN is used to automatically extract features from patients' diagnostic texts; these text features are integrated with the patients' preoperative examination and intraoperative monitoring values, and the XGBOOST algorithm is then used to construct the heart failure prediction model. An experimental comparison was conducted on data from patients with heart failure at Southwest Hospital from 2014 to 2018. The results showed that the DPCNN-XGBOOST model improved predictive sensitivity by 3% and 31% compared with the text-only DPCNN model and the numeric-only XGBOOST model, respectively.
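A minimal sketch of this fusion-then-boost structure: concatenate text-derived features with numeric values and train an XGBoost classifier on the fused matrix. The DPCNN encoder is stood in for by random embeddings, and all shapes and hyperparameters are illustrative, not values from the paper.

```python
import numpy as np
import xgboost as xgb

def fuse_and_fit(text_embeddings, numeric_features, labels):
    """Concatenate text features (assumed to come from a DPCNN-style
    encoder) with numeric preoperative/intraoperative values, then fit
    an XGBoost classifier on the fused feature matrix."""
    X = np.hstack([text_embeddings, numeric_features])
    model = xgb.XGBClassifier(n_estimators=200, max_depth=4,
                              learning_rate=0.1, eval_metric="logloss")
    model.fit(X, labels)
    return model

# Toy usage with random stand-ins for DPCNN embeddings and monitoring values.
rng = np.random.default_rng(0)
emb = rng.normal(size=(500, 64))        # pretend DPCNN text embeddings
num = rng.normal(size=(500, 12))        # pretend exam/monitoring values
y = rng.integers(0, 2, size=500)        # pretend heart-failure labels
clf = fuse_and_fit(emb, num, y)
```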